62 research outputs found
Genotype–phenotype mapping implications for genetic programming representation: commentary on “On the mapping of genotype to phenotype in evolutionary algorithms” by Peter A. Whigham, Grant Dick, and James Maclaurin
This comment refers to the article available at doi:10.1007/s10710-017-9288-x. Here we comment on the article “On the mapping of genotype to phenotype in evolutionary algorithms” by Peter A. Whigham, Grant Dick, and James Maclaurin. The article reasons about analogies from molecular biology to evolutionary algorithms and discusses conditions for biological adaptations in the context of grammatical evolution, which provides a useful perspective to GP practitioners. However, the connection to the listed implications for GP is not sufficiently convincing for the reader. Therefore this commentary will (1) examine the proposed principles one by one, challenging the authors to provide more supporting evidence where we felt this was needed, and (2) propose a methodical way for GP practitioners to apply these principles when designing GP representations.
Improving the Tartarus problem as a benchmark in genetic programming
For empirical research on computer algorithms, it is essential to have a set of benchmark problems on which the relative performance of different methods and their applicability can be assessed. In the majority of computational research fields there are established sets of benchmark problems; however, the field of genetic programming lacks a similarly rigorously defined set of benchmarks. There is a strong interest within the genetic programming community in developing such a suite. Following recent surveys [7], the desirable characteristics of a benchmark problem are now better defined. In this paper the Tartarus problem is proposed as a tunably difficult benchmark problem for use in genetic programming. The justification for this proposal is presented, together with guidance on its usage as a benchmark.
Modelling human preference in evolutionary art
Creative activities, including the arts, are characteristic of humankind. Our understanding of creativity is limited, yet there is substantial research aiming to mimic human creativity in artificial systems and, in particular, to produce systems that automatically evolve art appreciated by humans. We propose here to model human visual preference by a set of aesthetic measures identified through observation of human selection of images, and then to use these for the automatic evolution of aesthetic images.
Using genetic algorithms in computer vision: registering images to 3D surface model
This paper shows a successful application of genetic algorithms in computer vision. We aim at building photorealistic 3D models of real-world objects by adding textural information to the geometry. In this paper we focus on the 2D–3D registration problem: given a 3D geometric model of an object and optical images of the same object, we need to find the precise alignment of the 2D images to the 3D model. We generalise the photo-consistency approach of Clarkson et al., who assume calibrated cameras, so that only the pose of the object in the world needs to be estimated. Our method extends this approach to the case of uncalibrated cameras, where both intrinsic and extrinsic camera parameters are unknown. We formulate the problem as an optimisation and use a genetic algorithm to find a solution. We use semi-synthetic data to study the effects of different parameter settings on the registration. Additionally, experimental results on real data are presented to demonstrate the efficiency of the method.
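The optimisation loop behind such a registration can be illustrated with a toy genetic algorithm. The quadratic cost, the three-parameter "camera" vector, and all GA settings below are hypothetical stand-ins for the paper's photo-consistency measure and real camera model — a minimal sketch, not the authors' implementation:

```python
import random

# Hypothetical stand-in for the photo-consistency cost: squared distance
# of a candidate parameter vector from an unknown "true" camera setting.
TRUE_PARAMS = [1.2, 0.3, -0.7]

def cost(params):
    return sum((p - t) ** 2 for p, t in zip(params, TRUE_PARAMS))

def genetic_search(pop_size=40, generations=60, sigma=0.1, seed=0):
    rng = random.Random(seed)
    # Random initial population over an assumed parameter range.
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]         # blend crossover
            child = [g + rng.gauss(0.0, sigma) for g in child]  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = genetic_search()
print(best, cost(best))  # cost shrinks toward zero
```

In the paper the fitness would instead measure photo-consistency between the optical images and the textured 3D model, and the genome would encode both intrinsic and extrinsic camera parameters.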
On the effects of pseudorandom and quantum-random number generators in soft computing
In this work, we argue that the choice between pseudorandom and quantum-random number generators (PRNG and QRNG) affects the performance and behaviour of various machine learning models that require random input, implications that had not been explored in soft computing before this work. We use a CPU and a QPU to generate random numbers for multiple machine learning techniques. Random numbers are employed in the random initial weight distributions of dense and convolutional neural networks, where the results show a profound difference in learning patterns between the two. In 50 dense neural networks (25 PRNG/25 QRNG), QRNG improves over PRNG for accent classification by +0.1%, and QRNG exceeds PRNG for mental state EEG classification by +2.82%. In 50 convolutional neural networks (25 PRNG/25 QRNG), the MNIST and CIFAR-10 problems are benchmarked; on MNIST the QRNG starts at a higher accuracy than the PRNG but ultimately exceeds it by only 0.02%, while on CIFAR-10 the QRNG outperforms the PRNG by +0.92%. The n-random split of a Random Tree is extended towards a new Quantum Random Tree (QRT) model, which has differing classification abilities to its classical counterpart; 200 trees are trained and compared (100 PRNG/100 QRNG). On the accent classification data set, a QRT seemed inferior to an RT, performing worse on average by −0.12%. This pattern is also seen in the EEG classification problem, where a QRT performs worse than an RT by −0.28%. Finally, the QRT is ensembled into a Quantum Random Forest (QRF), which also shows a noticeable effect when compared to the standard Random Forest (RF). Ten to 100 ensembles of trees are benchmarked for the accent and EEG classification problems. In accent classification, the best RF (100 RT) outperforms the best QRF (100 QRT) by 0.14% accuracy. In EEG classification, the best RF (100 RT) outperforms the best QRF (100 QRT) by 0.08% but is considerably more complex, requiring twice the number of trees in the committee. All differences are observed to be situationally positive or negative and thus are likely data-dependent in their observed functional behaviour.
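The experimental setup — two otherwise identical learners differing only in the source of their initial weights — can be sketched as follows. Since no quantum hardware is available here, an entropy-seeded stream stands in for the QRNG, and the tiny perceptron and data set are illustrative, not the paper's networks:

```python
import os
import random

def init_weights(n, source):
    """Draw n initial weights from the given random stream."""
    return [source.uniform(-0.5, 0.5) for _ in range(n)]

def train_perceptron(weights, data, epochs=100, lr=0.5):
    """Plain perceptron training; only the initial weights vary."""
    w = list(weights)
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
    return w

# Tiny linearly separable toy task (first input is a constant bias term).
data = [([1.0, 0.0, 0.0], 0), ([1.0, 1.0, 1.0], 1),
        ([1.0, 0.2, 0.1], 0), ([1.0, 0.9, 0.8], 1)]

prng = random.Random(42)                     # deterministic pseudorandom stream
qrng_standin = random.Random(int.from_bytes(os.urandom(8), "big"))  # QRNG stand-in

results = {}
for name, src in [("PRNG", prng), ("QRNG stand-in", qrng_standin)]:
    w = train_perceptron(init_weights(3, src), data)
    acc = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0) == y
              for x, y in data) / len(data)
    results[name] = acc
print(results)  # both sources converge on this toy task; the paper's point
                # is that on harder tasks the random source itself matters
```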
Partially Lazy Classification of Cardiovascular Risk via Multi-way Graph Cut Optimization
Cardiovascular disease (CVD) is considered a leading cause of human mortality, with rising trends worldwide. Therefore, early identification of seemingly healthy subjects at risk is a priority. For this purpose, we propose a novel classification algorithm that provides a sound individual risk prediction, based on a non-invasive assessment of retinal vascular function. So-called lazy classification methods offer reduced time complexity by saving model construction time and better adapting to newly available instances, when compared to well-known eager methods. Lazy methods are widely used due to their simplicity and competitive performance. However, traditional lazy approaches are more vulnerable to noise and outliers, due to their full reliance on the instances' local neighbourhood for classification. In this work, a learning method based on Graph Cut Optimization called GCO mine is proposed, which considers both the local arrangements and the global structure of the data, resulting in improved performance relative to traditional lazy methods. We compare GCO mine coupled with genetic algorithms (hGCO mine) with established lazy and eager algorithms to predict cardiovascular risk based on Retinal Vessel Analysis (RVA) data. The highest accuracy of 99.52% is achieved by hGCO mine. The performance of GCO mine is additionally demonstrated on 12 benchmark medical datasets from the UCI repository. In 8 out of 12 datasets, GCO mine outperforms its counterparts. GCO mine is recommended for studies where new instances are expected to be acquired over time, as it saves model creation time and allows for better generalisation compared to state-of-the-art methods.
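The "lazy" baseline the abstract contrasts with — no model built in advance, classification relying entirely on the local neighbourhood — is essentially nearest-neighbour classification. A minimal 1-NN sketch follows; the toy features and labels are hypothetical, and the multi-way graph cut machinery of GCO mine itself is beyond a short example:

```python
import math

# Toy training instances: (feature vector, risk label). Purely illustrative;
# real inputs would be Retinal Vessel Analysis measurements.
train = [([0.1, 0.2], "low-risk"), ([0.2, 0.1], "low-risk"),
         ([0.9, 0.8], "high-risk"), ([0.8, 0.9], "high-risk")]

def classify_1nn(query):
    """Lazy classification: no training phase, all work happens at query time."""
    _, label = min(train, key=lambda inst: math.dist(inst[0], query))
    return label

print(classify_1nn([0.85, 0.85]))  # high-risk
print(classify_1nn([0.15, 0.15]))  # low-risk
```

GCO mine keeps this "no model built up front" property but decides labels via a graph cut over both local and global structure, which is what reduces the sensitivity to noisy neighbours.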
British Sign Language Recognition via Late Fusion of Computer Vision and Leap Motion with Transfer Learning to American Sign Language
In this work, we show that a late fusion approach to multimodality in sign language recognition improves the overall ability of the model in comparison to the singular approaches of image classification (88.14%) and Leap Motion data classification (72.73%). With a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two deep neural networks are benchmarked and compared to derive a best topology for each. The Vision model is implemented by a Convolutional Neural Network and optimised Artificial Neural Network, and the Leap Motion model is implemented by an evolutionary search of Artificial Neural Network topology. Next, the two best networks are fused for synchronised processing, which results in a better overall result (94.44%) as complementary features are learnt in addition to the original task. The hypothesis is further supported by application of the three models to a set of completely unseen data, where a multimodality approach achieves the best results relative to the single-sensor methods. When transfer learning with the weights trained via British Sign Language, all three models outperform standard random weight distribution when classifying American Sign Language (ASL), and the best model overall for ASL classification was the transfer learning multimodality approach, which scored 82.55% accuracy.
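A minimal way to see what late fusion means: each modality model emits class probabilities, and a fused decision combines them. The numbers below are made up, and the weighted average shown is a simpler fusion rule than the paper's, which instead feeds both networks' outputs into further trained layers:

```python
# Hypothetical per-modality outputs over three gesture classes
# (the real models cover 18 BSL gestures).
vision_probs = [0.60, 0.30, 0.10]   # from the CNN on camera images
leap_probs = [0.20, 0.70, 0.10]     # from the ANN on Leap Motion data

def late_fuse(p, q, w_p=0.5):
    """Weighted average of two probability vectors, renormalised."""
    fused = [w_p * a + (1.0 - w_p) * b for a, b in zip(p, q)]
    total = sum(fused)
    return [f / total for f in fused]

fused = late_fuse(vision_probs, leap_probs)
prediction = max(range(len(fused)), key=fused.__getitem__)
print(fused, prediction)  # the Leap evidence flips the decision to class 1
```

Here the vision model alone would predict class 0, but the complementary Leap Motion evidence shifts the fused decision, illustrating why combining modalities can beat either sensor alone.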
Emergence in genetic programming: let's exploit it!
Banzhaf explores the concept of emergence and how and where it happens in genetic programming [1]. Here we consider the question: what shall we do with it? We argue that, given our ultimate goal of producing genetic programming systems that solve new and difficult problems, we should take advantage of emergence to get closer to this goal.
Gaining insights into road traffic data through genetic improvement
We argue that Genetic Improvement can be successfully used for enhancing road traffic data mining. This would support the relevant decision makers in extending the existing network of devices that sense and control city traffic, with the end goal of improving vehicle flow and reducing the frequency of road accidents. Our position results from a set of preliminary observations emerging from the analysis of open-access road traffic data collected in real time by the Birmingham City Council.